Introduction to Open Data Science

Autumn 2017

Link to my GitHub repository
https://github.com/aadomino/IODS-project

Our era of data - larger than ever and complex like chaos - requires several skills from statisticians and other data scientists. We must discover the patterns hidden behind numbers in matrices and arrays.


We are not afraid of

We want to

  1. visualize,
  2. analyze,
  3. interpret:
    • understand
    • communicate

These are the core themes of Open Data Science and this course.


Regression and model validation

The objective of this week was learning, performing and interpreting the results of regression analysis. This part includes code, interpretations and explanations of the results, obtained with blood, sweat and tears.

1. Reading and exploring the data

The data used in this part comes from an international survey of approaches to learning (see more on this page).

You can find the pre-processed data here: GitHub data repository.

The dataset, learning2014, consists of 166 observations and 7 variables - these are the dimensions of the data. You can see it here:

learning2014 <- read.table("C:/Users/P8Z77-V/Documents/learning2014.csv", header = TRUE, sep = "\t")
dim(learning2014)
## [1] 166   7

It is also possible to examine the structure of the data frame:

str(learning2014)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
##  $ age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ attitude: int  37 31 25 35 37 38 35 29 38 21 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ points  : int  25 12 24 10 22 21 21 31 24 26 ...

The variables occurring in the data are gender, age, attitude, deep, surf, stra, and points.

These variables categorise the students’ answers from the survey. The questions pertained to the students’ assessment of their deep, strategic and surface learning, and data was also collected on their age, gender, and attitude towards statistics.

The dataset does not contain the students who received 0 points from the final exam. These results have been filtered out:

learning2014 <- dplyr::filter(learning2014, points > 0)

2. Taking a more detailed look at the data

Summary of the variables

The summary of the data includes a lot of information on all the variables: the minimum, maximum, median and mean values of each variable, and the first and third quartiles of the data.

summary(learning2014)
##  gender       age           attitude          deep            surf      
##  F:110   Min.   :17.00   Min.   :14.00   Min.   :1.583   Min.   :1.583  
##  M: 56   1st Qu.:21.00   1st Qu.:26.00   1st Qu.:3.333   1st Qu.:2.417  
##          Median :22.00   Median :32.00   Median :3.667   Median :2.833  
##          Mean   :25.51   Mean   :31.43   Mean   :3.680   Mean   :2.787  
##          3rd Qu.:27.00   3rd Qu.:37.00   3rd Qu.:4.083   3rd Qu.:3.167  
##          Max.   :55.00   Max.   :50.00   Max.   :4.917   Max.   :4.333  
##       stra           points     
##  Min.   :1.250   Min.   : 7.00  
##  1st Qu.:2.625   1st Qu.:19.00  
##  Median :3.188   Median :23.00  
##  Mean   :3.121   Mean   :22.72  
##  3rd Qu.:3.625   3rd Qu.:27.75  
##  Max.   :5.000   Max.   :33.00

An overview of this dataset produces a few interesting observations. There are clearly more female respondents than male ones, and the age variable shows a typical age distribution of university students: most are in their twenties, with ages ranging from 17 to 55. The attitude variable shows that the survey participants approach statistics with a slightly more positive than negative attitude. Deep learning is favoured over strategic or surface learning (the deep, stra and surf variables). The final exam results range from 7 to 33.

A graphical overview

With a graphical plot we can actually visualise the variables and the relationships between them.

# Access the GGally and ggplot2 libraries.
library(GGally)
library(ggplot2)

# Read a plot matrix with ggpairs() into a variable p0. Draw.
p0 <- ggpairs(learning2014, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))
p0

It is a complex plot. One thing that stands out is a moderate positive correlation between the exam points received by the students and their attitude towards statistics; the correlations between points and the other variables - and between the variables in our dataset in general - are weak.

This correlation makes sense: if a student has a positive outlook on statistics (as we all do), they are more likely to study and obtain good results on the final test. It is possible to visualise this particular relationship in more detail. Here, it is done by plotting the variables attitude and points as a scatterplot, with colour coding gender. Regression lines are also included:

# Access the ggplot2 library.
library(ggplot2)

# Draw the plot (p1) with our data. Define the mapping. Define the visualization type (dots) and smoothing. Add the plot title.
p1 <- ggplot(learning2014, aes(x = attitude, y = points, col = gender)) + geom_point() + geom_smooth(method = "lm") + ggtitle("Students' attitude towards statistics vs final exam points")
p1

3. Choosing and fitting a regression model

In this part we will choose and fit a suitable regression model, which will explain the data in more detail - we want to find out which factors influence the amount of exam points received. Points is the target (dependent) variable. The previous section shows that attitude, stra and surf correlate most strongly with points. They will be our explanatory variables in this model, the summary of which is printed out below:

# Fit a regression model (m0) with multiple explanatory variables: attitude, stra, surf. Print a summary of the model.
m0 <- lm(points ~ attitude + stra + surf, data = learning2014)
summary(m0)
## 
## Call:
## lm(formula = points ~ attitude + stra + surf, data = learning2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.1550  -3.4346   0.5156   3.6401  10.8952 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 11.01711    3.68375   2.991  0.00322 ** 
## attitude     0.33952    0.05741   5.913 1.93e-08 ***
## stra         0.85313    0.54159   1.575  0.11716    
## surf        -0.58607    0.80138  -0.731  0.46563    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared:  0.2074, Adjusted R-squared:  0.1927 
## F-statistic: 14.13 on 3 and 162 DF,  p-value: 3.156e-08

The summary shows the model call, the distribution of the residuals, and the estimated coefficients.

Residuals are assumed to be normally distributed with zero mean and constant variance. The median is indeed close to zero and the residuals seem to follow a normal distribution.

Coefficients show the estimated influence of each explanatory variable on the target variable - the expected change in the target for a one-unit change in that explanatory variable, holding the others constant. In other words, here, if attitude increases by 1, the expected exam points increase by 0.33952. The more excited the students are about the subject, the better their chances to pass the final with flying colours.
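
A small simulated sketch of this interpretation - toy data, not the learning2014 dataset:

```r
# Toy sketch (simulated data, not learning2014): the slope of a
# linear model is exactly the predicted change in the response
# for a one-unit increase in the explanatory variable.
set.seed(1)
toy <- data.frame(attitude = runif(100, 14, 50))
toy$points <- 11 + 0.34 * toy$attitude + rnorm(100, sd = 5)

fit <- lm(points ~ attitude, data = toy)

# Predicted points at attitude = 30 and attitude = 31:
pred <- predict(fit, newdata = data.frame(attitude = c(30, 31)))

# The difference equals the estimated slope coefficient.
diff(pred)
coef(fit)["attitude"]
```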

The summary also shows the standard errors, t-values and p-values, and indicates the significance levels. The effect of attitude on the dependent variable (exam points) is statistically significant, while stra and surf are not (p-values well over 0.05). If an explanatory variable in the model does not have a statistically significant relationship with the target variable, we remove the variable from the model and fit the model again without it. In the new summary below, the residuals’ median has decreased slightly and attitude remains highly statistically significant.

# Create a regression model m1 with only attitude. Print a summary of the model.
m1 <- lm(points ~ attitude, data = learning2014)
summary(m1)
## 
## Call:
## lm(formula = points ~ attitude, data = learning2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.9763  -3.2119   0.4339   4.1534  10.6645 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 11.63715    1.83035   6.358 1.95e-09 ***
## attitude     0.35255    0.05674   6.214 4.12e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared:  0.1906, Adjusted R-squared:  0.1856 
## F-statistic: 38.61 on 1 and 164 DF,  p-value: 4.119e-09

4. Interpreting the results

The second summary indicates that the estimated effect of students’ attitude on exam results is 0.35255. Again, this means that for each one-unit increase in attitude, the exam results are expected to increase by 0.35255 points.

The multiple R-squared value measures how much of the variation (variance) of the target variable is explained by the model; the rest of the variance is due to factors not included in the model. It can be understood as a goodness-of-fit measure.

The multiple R-squared is higher in the first model, even though two of its explanatory variables were not statistically significant and were subsequently dropped. This is expected: the multiple R-squared never decreases when variables are added to the model, irrespective of their significance, which is why the adjusted R-squared is better suited for comparing models with different numbers of variables.
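
Both claims are easy to verify on simulated data: the R-squared can be computed by hand as 1 - RSS/TSS, and adding even a pure-noise variable never lowers the multiple R-squared:

```r
# Sketch on simulated data: R-squared by hand, and its behaviour
# when an irrelevant variable is added.
set.seed(2)
n <- 100
x <- rnorm(n)
y <- 2 + 0.5 * x + rnorm(n)
noise <- rnorm(n)                      # unrelated to y by construction

m_small <- lm(y ~ x)
m_big   <- lm(y ~ x + noise)

# R-squared = 1 - residual sum of squares / total sum of squares
r2_by_hand <- 1 - sum(resid(m_small)^2) / sum((y - mean(y))^2)
all.equal(r2_by_hand, summary(m_small)$r.squared)

# Multiple R-squared cannot decrease when a variable is added:
summary(m_big)$r.squared >= summary(m_small)$r.squared
```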

5. Creating diagnostic plots

Residuals vs Fitted values, Normal QQ-plot and Residuals vs Leverage:

# Diagnostic plots using the plot() function. Choose the plots 1, 2 and 5.
par(mfrow = c(1,1))
plot(m1, which = c(1,2,5))

These plots allow us to assess if some of the assumptions we made about our linear regression model are correct.

The first plot, residuals vs fitted values, is a scatter plot with residuals on the y-axis and fitted values (estimated responses) on the x-axis. The plot is used to detect non-linearity, unequal error variances, and outliers - as simply explained here.

Our plot seems to show that residuals and the fitted values are uncorrelated, just as they should be in a linear model with normally distributed errors and constant variance. In other words, the scatter plot confirms our assumption about the error distribution and variance. Great.

The second plot is a Q-Q plot (quantile-quantile plot), which is used to assess if the target variable we took from our dataset really has the distribution we assumed in our model, which, for us, is a normal distribution. (A great source on interpreting this kind of plots can be found here).

A Q-Q plot is a scatterplot created by plotting two sets of quantiles against one another. If both sets of quantiles came from the same distribution, we should see the points forming a line that’s roughly straight.

Our plot indeed forms a straight line. Assumption confirmed.
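
The construction behind the plot can be sketched in a few lines - here on simulated normal data rather than our actual residuals:

```r
# Sketch: a normal Q-Q plot pairs the sorted sample with the
# corresponding theoretical normal quantiles.
set.seed(3)
res <- rnorm(200)                      # stand-in for model residuals

sample_q      <- sort(res)
theoretical_q <- qnorm(ppoints(200))   # expected normal quantiles

# For normal data the two quantile sets are almost perfectly
# linearly related, so the points fall close to a straight line.
cor(sample_q, theoretical_q)

plot(theoretical_q, sample_q)
abline(0, 1, col = "red")
```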

The third plot, Residuals vs Leverage, allows us to see if the extreme values in the data influence the regression line, i.e. if the fact that we include them in our dataset influences the overall results.

The patterns in this plot are not really relevant. There are two things to look for:

  • outlying values at the upper right or lower right corner - values far away from the rest of the data points,
  • cases outside of the dashed red line (Cook’s distance).

In our plot, we have no influential cases. The Cook’s distance lines are not even visible, which means that all our data points fall well within them. There are no extreme values; the plot is actually typical for datasets with no influential cases.

Finally, some more reading I enjoyed on the subject of diagnostic plots.



Clustering and classification

A lot of fun, surely.

1. Data

Tasks 1-3.

First, access the necessary libraries.

# Access the needed libraries:
library(dplyr)
## 
## Attaching package: 'dplyr'
## The following object is masked from 'package:GGally':
## 
##     nasa
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
library(tidyr)
library(ggplot2)
library(boot)
library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
library(tidyverse)
## -- Attaching packages -------------------------------------- tidyverse 1.2.1 --
## ✓ tibble  1.3.4     ✓ purrr   0.2.4
## ✓ readr   1.1.1     ✓ stringr 1.2.0
## ✓ tibble  1.3.4     ✓ forcats 0.2.0
## -- Conflicts ----------------------------------------- tidyverse_conflicts() --
## x dplyr::filter() masks stats::filter()
## x dplyr::lag()    masks stats::lag()
## x MASS::select()  masks dplyr::select()
library(corrplot)
## corrplot 0.84 loaded

Let’s load the Boston data from the MASS package and explore the structure and the dimensions of the data and describe the dataset.

# load the data
data("Boston")

# explore the dataset
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08204   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

The Boston data frame has 506 rows and 14 columns. It describes housing values in the suburbs of Boston.

What are the variables in the data?

colnames(Boston)
##  [1] "crim"    "zn"      "indus"   "chas"    "nox"     "rm"      "age"    
##  [8] "dis"     "rad"     "tax"     "ptratio" "black"   "lstat"   "medv"

The descriptions of the variables are available here. They concern such things as per capita crime rate by town, average number of rooms per dwelling, or even pupil-teacher ratio by town.

Now let’s have a look at a graphical overview of the data. The variable summary above already shows the minimum, maximum, median and mean values as well as the 1st and 3rd quartiles of each variable.

The correlations between the different variables can be studied with the help of a correlations matrix and a correlations plot.

# First calculate the correlation matrix and round it so that it includes only two digits:
cor_matrix<-cor(Boston) %>% round(digits = 2)

# Print the correlation matrix:
cor_matrix
##          crim    zn indus  chas   nox    rm   age   dis   rad   tax
## crim     1.00 -0.20  0.41 -0.06  0.42 -0.22  0.35 -0.38  0.63  0.58
## zn      -0.20  1.00 -0.53 -0.04 -0.52  0.31 -0.57  0.66 -0.31 -0.31
## indus    0.41 -0.53  1.00  0.06  0.76 -0.39  0.64 -0.71  0.60  0.72
## chas    -0.06 -0.04  0.06  1.00  0.09  0.09  0.09 -0.10 -0.01 -0.04
## nox      0.42 -0.52  0.76  0.09  1.00 -0.30  0.73 -0.77  0.61  0.67
## rm      -0.22  0.31 -0.39  0.09 -0.30  1.00 -0.24  0.21 -0.21 -0.29
## age      0.35 -0.57  0.64  0.09  0.73 -0.24  1.00 -0.75  0.46  0.51
## dis     -0.38  0.66 -0.71 -0.10 -0.77  0.21 -0.75  1.00 -0.49 -0.53
## rad      0.63 -0.31  0.60 -0.01  0.61 -0.21  0.46 -0.49  1.00  0.91
## tax      0.58 -0.31  0.72 -0.04  0.67 -0.29  0.51 -0.53  0.91  1.00
## ptratio  0.29 -0.39  0.38 -0.12  0.19 -0.36  0.26 -0.23  0.46  0.46
## black   -0.39  0.18 -0.36  0.05 -0.38  0.13 -0.27  0.29 -0.44 -0.44
## lstat    0.46 -0.41  0.60 -0.05  0.59 -0.61  0.60 -0.50  0.49  0.54
## medv    -0.39  0.36 -0.48  0.18 -0.43  0.70 -0.38  0.25 -0.38 -0.47
##         ptratio black lstat  medv
## crim       0.29 -0.39  0.46 -0.39
## zn        -0.39  0.18 -0.41  0.36
## indus      0.38 -0.36  0.60 -0.48
## chas      -0.12  0.05 -0.05  0.18
## nox        0.19 -0.38  0.59 -0.43
## rm        -0.36  0.13 -0.61  0.70
## age        0.26 -0.27  0.60 -0.38
## dis       -0.23  0.29 -0.50  0.25
## rad        0.46 -0.44  0.49 -0.38
## tax        0.46 -0.44  0.54 -0.47
## ptratio    1.00 -0.18  0.37 -0.51
## black     -0.18  1.00 -0.37  0.33
## lstat      0.37 -0.37  1.00 -0.74
## medv      -0.51  0.33 -0.74  1.00
# Visualize the correlation matrix with a correlations plot:
corrplot(cor_matrix, method="circle", type = "upper", cl.pos = "b", tl.pos = "d", tl.cex = 0.6)

From the plot above we can easily see which variables correlate with each other and whether the correlation is positive (blue) or negative (red). Some observations:

  • accessibility to radial highways (rad) is highly positively correlated with property taxes (tax), as are industrial land use (indus) and air pollution (nox) - land next to the highways is taxed higher because of higher profit possibilities, there are more industrial plots, and air pollution levels are up because of the cars;
  • the distance from the employment centres (dis) correlates negatively with the age of the houses (age), nitrogen oxides concentration (nox) and the proportion of non-retail business acres (indus) - there are newer houses farther away from the centrally-located workplaces, there is less air pollution in the suburbs, and less land is devoted to businesses other than local retail shops;
  • the Charles River dummy variable (chas) does not correlate with any other variable, as is to be expected.

2. Standardization and scaling of the data

Task 4

In this part, we are performing the following:

  • Standardize the dataset and print out summaries of the scaled data.
  • Create a categorical variable of the crime rate in the Boston dataset (from the scaled crime rate).
  • Use the quantiles as the break points in the categorical variable.
  • Drop the old crime rate variable from the dataset.
  • Divide the dataset into train and test sets, so that 80% of the data belongs to the train set.

Let’s standardize the dataset and print out summaries of the scaled data for the later classification and clustering analysis. How did the variables change?

# center and standardize variables
boston_scaled <- scale(Boston)

# summaries of the scaled variables
summary(boston_scaled)
##       crim                 zn               indus        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202  
##       chas              nox                rm               age         
##  Min.   :-0.2723   Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331  
##  1st Qu.:-0.2723   1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366  
##  Median :-0.2723   Median :-0.1441   Median :-0.1084   Median : 0.3171  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.:-0.2723   3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059  
##  Max.   : 3.6648   Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164  
##       dis               rad               tax             ptratio       
##  Min.   :-1.2658   Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047  
##  1st Qu.:-0.8049   1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876  
##  Median :-0.2790   Median :-0.5225   Median :-0.4642   Median : 0.2746  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6617   3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058  
##  Max.   : 3.9566   Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372  
##      black             lstat              medv        
##  Min.   :-3.9033   Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.: 0.2049   1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median : 0.3808   Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4332   3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 0.4406   Max.   : 3.5453   Max.   : 2.9865

The variables are now on a similar scale: each has mean zero and unit variance, which makes them easier to compare and prevents any single variable from dominating the distance-based methods used below.
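
What scale() does to a single variable can be written out by hand:

```r
# Sketch: scale() subtracts the mean and divides by the standard
# deviation, column by column.
x <- c(2, 4, 6, 8)

by_hand <- (x - mean(x)) / sd(x)
scaled  <- as.vector(scale(x))

all.equal(by_hand, scaled)     # the two agree

# The scaled variable has mean zero and unit standard deviation:
mean(scaled)
sd(scaled)
```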

Create a categorical variable of the crime rate in the Boston dataset (from the scaled crime rate). This variable shows the quantiles of the scaled crime rate and is now used instead of the previous continuous one.

# class of the boston_scaled object
class(boston_scaled)
## [1] "matrix"
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)

# summary of the scaled crime rate
summary(boston_scaled$crim)
##      Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
## -0.419367 -0.410563 -0.390280  0.000000  0.007389  9.924110
# create a quantile vector of crim and print it
bins <- quantile(boston_scaled$crim)
bins
##           0%          25%          50%          75%         100% 
## -0.419366929 -0.410563278 -0.390280295  0.007389247  9.924109610
# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))

# look at the table of the new factor crime
table(crime)
## crime
##      low  med_low med_high     high 
##      127      126      126      127
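
The near-equal class sizes are no accident: quartile break points split any continuous variable into four groups of (roughly) equal size. A toy sketch of the same cut()-with-quantile() idea:

```r
# Sketch: binning a continuous variable at its quartiles gives
# four classes of equal size.
set.seed(4)
x <- rnorm(400)

bins  <- quantile(x)           # 0%, 25%, 50%, 75%, 100%
categ <- cut(x, breaks = bins, include.lowest = TRUE,
             labels = c("low", "med_low", "med_high", "high"))

table(categ)                   # 100 observations in each class
```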

Let’s drop the old crime rate variable from the dataset and replace it with the new categorical variable for crime rates - for clarity:

# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)

# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)

Finally, the last step: 80% of the data will become the training (train) set and 20% the test set. The actual predictions on new data are made with the test set.

# number of rows in the Boston dataset 
n <- nrow(boston_scaled)

# choose randomly 80% of the rows
ind <- sample(n,  size = n * 0.8)

# create train set
train <- boston_scaled[ind,]

# create test set 
test <- boston_scaled[-ind,]

3. Linear discriminant analysis

Tasks 5 and 6

Now let’s fit the linear discriminant analysis on the train set. LDA is a generalization of Fisher’s linear discriminant, a method used in statistics, pattern recognition and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events (as explained by everyone’s fav source).

We will use the categorical crime rate as the target variable and all the other variables in the dataset as predictor variables.

# linear discriminant analysis
lda.fit <- lda(crime ~., data = train)

# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2574257 0.2376238 0.2475248 0.2574257 
## 
## Group means:
##                   zn      indus        chas        nox          rm
## low       0.99181488 -0.9426166 -0.08304540 -0.8832579  0.49769998
## med_low  -0.08057735 -0.3411370 -0.06727176 -0.5707191 -0.09009324
## med_high -0.37404455  0.1884482  0.16075196  0.3996882  0.09757179
## high     -0.48724019  1.0149946 -0.08304540  1.0472511 -0.32162685
##                 age        dis        rad        tax    ptratio
## low      -0.8915819  0.9048202 -0.6925458 -0.7397175 -0.4582434
## med_low  -0.4191964  0.3564509 -0.5404292 -0.4789849 -0.1112962
## med_high  0.3976040 -0.3522815 -0.3927076 -0.2925003 -0.2251944
## high      0.8005209 -0.8429654  1.6596029  1.5294129  0.8057784
##                black       lstat        medv
## low       0.37605438 -0.79011979  0.59172905
## med_low   0.30704458 -0.20409266  0.04887249
## med_high  0.09709822 -0.01572081  0.15517859
## high     -0.84492697  0.91638421 -0.72149786
## 
## Coefficients of linear discriminants:
##                 LD1          LD2         LD3
## zn       0.13181052  0.619759127 -1.00009922
## indus   -0.01525665 -0.337139130  0.28974144
## chas    -0.07132199 -0.013696253  0.06391731
## nox      0.27450503 -0.844726043 -1.38746530
## rm      -0.06164374 -0.033332984 -0.16437531
## age      0.37823351 -0.340001847 -0.34397442
## dis     -0.08140883 -0.283045969 -0.02660027
## rad      3.23879802  0.885333550 -0.12433654
## tax     -0.02565999  0.105544195  0.60043512
## ptratio  0.16349411 -0.001335771 -0.41852012
## black   -0.14216284 -0.030679533  0.06872368
## lstat    0.17652029 -0.102101244  0.38921898
## medv     0.13948669 -0.337973250 -0.22677602
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9512 0.0363 0.0125

The LDA calculates the probability of a new observation being classified as belonging to each class on the basis of the trained model, and assigns every observation to the most probable class.
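
That mechanism can be illustrated on a small toy example - here the classic iris data rather than our Boston split:

```r
# Sketch: lda() fits the model, predict() returns the posterior
# probabilities and assigns each observation to the most probable
# class.
library(MASS)

set.seed(5)
idx <- sample(nrow(iris), 0.8 * nrow(iris))

fit  <- lda(Species ~ ., data = iris[idx, ])
pred <- predict(fit, newdata = iris[-idx, ])

head(pred$posterior, 3)                  # per-class probabilities
mean(pred$class == iris$Species[-idx])   # classification accuracy
```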

# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# target classes as numeric
classes <- as.numeric(train$crime)

# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 4)

A biplot is a visualisation that lets us clearly see the most influential predictor variables. It is clearly visible that accessibility to radial highways - rad - is the most telling variable.

In order to assess the performance of the model in predicting the crime rate, let’s save the crime categories from the test set and then remove the categorical crime variable from the test dataset…

# save the correct classes from test data
correct_classes <- test$crime

# remove the crime variable from test data
test <- dplyr::select(test, -crime)

…and then predict the classes with the LDA model on the test data with the predict() function, and cross tabulate the results with the crime categories from the test set:

# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)

# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       13       9        1    0
##   med_low    2      20        8    0
##   med_high   0       6       20    0
##   high       0       0        1   22

The cross tabulation of the results tells us that the model predicts the high crime rate class almost perfectly (which is to be expected, since rad was such a telling feature previously); the model has some problems separating med_low from low, but overall it performs really well.
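
For instance, the overall accuracy is the share of counts on the diagonal of the cross tabulation. Using the counts printed above (they will differ for every random train/test split):

```r
# Accuracy from the cross tabulation above: correct predictions
# sit on the diagonal.
lvls <- c("low", "med_low", "med_high", "high")
conf <- matrix(c(13,  9,  1,  0,
                  2, 20,  8,  0,
                  0,  6, 20,  0,
                  0,  0,  1, 22),
               nrow = 4, byrow = TRUE,
               dimnames = list(correct = lvls, predicted = lvls))

accuracy <- sum(diag(conf)) / sum(conf)
round(accuracy, 3)    # 0.735
```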

4. K-means clustering

Task 7

It’s time for data clustering. Let’s reload the Boston dataset and standardize it.

# center and standardize variables
boston_scaled <- scale(Boston)

# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)

The next step is to calculate the (Euclidean) distances between the observations, and to do that we’ll use a Euclidean distance matrix:

# euclidean distance matrix of the scaled data
dist_eu <- dist(boston_scaled)

# look at the summary of the distances
summary(dist_eu)
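
As a quick sanity check of what dist() computes, a two-point example where the Euclidean distance is known:

```r
# Sketch: the Euclidean distance is the square root of the summed
# squared coordinate differences.
m <- rbind(a = c(0, 0), b = c(3, 4))

d_by_hand <- sqrt(sum((m["a", ] - m["b", ])^2))
d_by_hand                      # 5

as.matrix(dist(m))["a", "b"]   # dist() agrees
```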

Now let’s perform the K-means clustering with K=3 and have a look at the plot (the last 5 columns):

# k-means clustering on the scaled data
km <- kmeans(boston_scaled, centers = 3)

# plot the dataset with clusters
pairs(boston_scaled[6:10], col = km$cluster)

But is it optimal? How do we know what the optimal amount of clusters is?

Let’s take the within-cluster sum of squares (WCSS) and look at how it changes with the number of clusters. The optimal number of clusters shows up as a sharp drop in the total WCSS.

set.seed(123)

# determine the number of clusters
k_max <- 10

# calculate the total within sum of squares for each k
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})

# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')

The optimal number of clusters seems to be 2, so let’s use that:

# k-means clustering with two clusters
km <- kmeans(boston_scaled, centers = 2)

# plot the dataset with clusters
pairs(boston_scaled[6:10], col = km$cluster)

We can also have a look at other columns:

pairs(boston_scaled[7:14], col = km$cluster)

Again it looks like the same variables as before are the most distinctive: access to highways and property tax.

5. Bonus

Actually the super-bonus exercise, because it’s worth more points.

Run the code below for the (scaled) train data that you used to fit the LDA. The code creates a matrix product, which is a projection of the data points.

model_predictors <- dplyr::select(train, -crime)

# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda.fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)

Next, install and access the plotly package. Create a 3D plot (Cool!) of the columns of the matrix product by typing the code below.

# access the needed libraries:
library(plotly)
## 
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
## 
##     select
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## The following object is masked from 'package:stats':
## 
##     filter
## The following object is masked from 'package:graphics':
## 
##     layout
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers')

Adjust the code: add a color argument to the plot_ly() function. Set the colour to be the crime classes of the train set.

plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = train$crime)

Draw another 3D plot where the colour is defined by the clusters of the k-means. The k-means model was fitted on all rows of the scaled data, so we select the cluster assignments of the training rows with the ind vector from the train/test split:

plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = km$cluster[ind])

Hmm. With only two k-means clusters against four crime classes, this one is more difficult to interpret.